Somehow I'm not able to decrypt our ML models on my machine. It does not matter whether:
I clean the build / delete the build folder
it's a local build or a build downloaded from our build server
I log in as a different user
I reboot my system (15.4.1 (24E263))
I use a different network
I re-generate the encryption keys
I'm the only one in my team confronted with this issue. Using the encrypted models works fine for everyone else.
As soon as our application tries to load the bundled ML model, the following error is logged and returned:
Could not create persistent key blob for CD49E04F-1A42-4FBE-BFC1-2576B89EC233 : error=Error Domain=com.apple.CoreML Code=9 "Failed to generate key request for CD49E04F-1A42-4FBE-BFC1-2576B89EC233 with error: -42908"
Error code 9 points to a decryption issue, but offers no useful pointers and suggests that some sort of network request needs to be made in order to decrypt our models.
/*! Core ML throws/returns this error when the framework encounters an error in the model
decryption subsystem.
The typical cause for this error is in the key server configuration and the client application
cannot do much about it.
For example, a model loading method will throw/return the error when it uses incorrect model
decryption key.
*/
MLModelErrorModelDecryption API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0)) = 9,
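For context, we load the model roughly like this (a minimal sketch; "MyModel" and the configuration are placeholders, not our exact code):

import CoreML

// Minimal sketch of our load path; "MyModel" is a placeholder name.
// Encrypted models must be loaded via the async API, and this is the
// call that throws the Code=9 / -42908 error above.
func loadModel() async {
    guard let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc") else {
        print("Model not found in bundle")
        return
    }
    do {
        let model = try await MLModel.load(contentsOf: url, configuration: MLModelConfiguration())
        print("Loaded: \(model)")
    } catch {
        print("Load failed: \(error)") // logs the Core ML Code=9 decryption error
    }
}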
I could not find a reference to error '-42908' anywhere.
ChatGPT just lied to me, as usual...
How can I resolve this or diagnose it further?
Thanks.
Hello,
I have created this basic swift program:
let session = LanguageModelSession(
    model: .default,
    instructions: "bla bla bla.")
I want to understand what I can put in the model parameter (instead of .default).
How can I choose between the on-device local model (.default, I suppose) and the Apple private cloud model (or any other)?
Thanks
My sample app has been working with the following code:
func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}
However, in 26 beta 5, ToolOutput is no longer available. Please advise what has changed.
Hi,
I'm trying to use the new RecognizeDocumentsRequest from the Vision framework to read a receipt. It looks very promising, being able to read paragraphs and lines and to detect data. Unfortunately, so far it seems to read every line on the receipt as a separate paragraph, and when there is more space within one line it creates two paragraphs.
Is there perhaps an Apple Engineer who knows if this is expected behaviour or if I should file a Feedback for this?
Code setup:
let request = RecognizeDocumentsRequest()
let observations = try await request.perform(on: image)

guard let document = observations.first?.document else {
    return
}

for paragraph in document.paragraphs {
    print(paragraph.transcript)
    for data in paragraph.detectedData {
        switch data.match.details {
        case .phoneNumber(let data):
            print("Phone: \(data)")
        case .postalAddress(let data):
            print("Postal: \(data)")
        case .calendarEvent(let data):
            print("Calendar: \(data)")
        case .moneyAmount(let data):
            print("Money: \(data)")
        case .measurement(let data):
            print("Measurement: \(data)")
        default:
            continue
        }
    }
}
See the attached image as an example of a receipt I'd like to parse. The top 3 lines are the name, street, and postal code + city. These are all separate paragraphs. Checking detectedData does see the street (2nd line) as a PostalAddress, but not the complete address. Might that be a locale thing, since it's a Dutch address?
Lower on the receipt, it also sees the block with "Pomp 1 95 Ongelood" and the lines below it as separate paragraphs, first picking up the left side and after that the right side. So it's something like this:
(left side:)
*
Pomp 1
Volume
Prijs
€
TOTAAL
*
BTW
Netto
21.00 %
(right side, picked up afterwards:)
95 Ongelood
41,90 l
1.949/ 1
81.66
€
14.17
67.49
Hi,
I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview. In the preview-image processing, I call into the Vision APIs to either detect a person or an object, then use the segmentation mask to extract the person and composite them onto a different background with some other filters. I'm using Core Image to filter the CIImages, then converting and displaying the result as a SwiftUI Image. When running on my iPhone, it works fine. When running on my iPhone with the debugger, it crashes within a few seconds. Attached is a screenshot. At the top is an EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>:
This was working fine a couple of days ago. Not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
Hey,
It would be great to have an equivalent of toolCallId for both toolCall and toolResult in the transcript. Otherwise, it is hard to connect tool calls with their respective responses when there are multiple parallel calls to the same tool.
Thanks!
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hello,
We have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block:
let requestHandler = ImageRequestHandler(paddedImage)
var request = CoreMLRequest(model: model)
request.cropAndScaleAction = .scaleToFit
let results = try await requestHandler.perform(request)
The client using this code is wrapped inside an actor, following Swift concurrency principles.
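For reference, the wrapping looks roughly like this (a minimal sketch; DetectionClient and the container property are illustrative names, not our exact code):

import CoreImage
import Vision

// Sketch of the actor that owns the model and serializes access to it.
actor DetectionClient {
    private let container: CoreMLModelContainer

    init(container: CoreMLModelContainer) {
        self.container = container
    }

    // Mirrors the snippet above: a fresh handler and request per call.
    func detect(in paddedImage: CIImage) async throws {
        let requestHandler = ImageRequestHandler(paddedImage)
        var request = CoreMLRequest(model: container)
        request.cropAndScaleAction = .scaleToFit
        let results = try await requestHandler.perform(request)
        _ = results // downstream processing elided
    }
}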
The issue has been consistently reproduced across multiple iPadOS versions, including:
iPadOS 18.4.0
iPadOS 18.4.1
iPadOS 18.5.0
This is the crash log:
Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
0 libobjc.A.dylib 0x7b98 objc_retain + 16
1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16
2 libobjc.A.dylib 0xbf18 objc_getProperty + 100
3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148
4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748
5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132
6 Vision 0x14600 VNExecuteBlock + 80
7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56
8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240
9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452
12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60
13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324
14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336
15 Vision 0x14600 VNExecuteBlock + 80
16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256
17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284
19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632
20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124
21 Vision 0x14600 VNExecuteBlock + 80
22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340
23 Vision 0x125410 __swift_memcpy112_8 + 4852
24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292
25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156
26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0xaac start_wqthread + 8
We found an issue similar to ours: https://developer.apple.com/forums/thread/770771.
But the crash logs are quite different, so we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
I couldn't find information about this in the documentation. Could someone clarify if this API is available and how to access it?
Based on the documentation, it appears that MLTensor can be used to perform tensor operations on the ANE (Apple Neural Engine) by wrapping the tensor operations in withMLTensorComputePolicy with an MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU, and CPU).
However, when using the Instruments app, it appears that the tensor operations never get executed on the Neural Engine.
It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a Core ML model file).
Based on this example, I've created a simple program to try it:
import Foundation
import CoreML

print("Starting...")

let semaphore = DispatchSemaphore(value: 0)

Task {
    await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
        let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
        let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
        let v3 = v1.matmul(v2)
        await v3.shapedArray(of: Float.self) // is 70.0

        let m1 = MLTensor(shape: [2, 3], scalars: [
            1, 2, 3,
            4, 5, 6
        ], scalarType: Float.self)
        let m2 = MLTensor(shape: [3, 2], scalars: [
            7, 8,
            9, 10,
            11, 12
        ], scalarType: Float.self)
        let m3 = m1.matmul(m2)
        let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

        // Supports broadcasting
        let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
        let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
        let m6 = m4.matmul(m5)

        print("Done")
        return result
    }
    semaphore.signal()
}

semaphore.wait()
Here's what I get in the Instruments app (see attached screenshot):
Notice how the Neural Engine line shows no usage.
I've run this test on an M1 Max MacBook Pro.
Hello everyone, I have a visual convolutional model and a video that has been decoded into many frames. When I perform inference on each frame in a loop, the speed is a bit slow. So I started 4 threads, each running inference simultaneously, but I found the overall speed is the same as serial inference, and every single forward pass is slower. I used the mactop tool to check the GPU utilization, and it was only around 20%. Is this normal? How can I accelerate it?
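For reference, the parallel setup I described looks roughly like this (a sketch; compiledModelURL and framesAsFeatures are hypothetical placeholders for the real model and decoded frames):

import Foundation
import CoreML

// Sketch of running inference on 4 workers over a shared MLModel.
let model = try MLModel(contentsOf: compiledModelURL)   // placeholder URL
let frames: [MLFeatureProvider] = framesAsFeatures      // placeholder inputs

DispatchQueue.concurrentPerform(iterations: 4) { worker in
    // Each of the 4 workers handles every 4th frame.
    var index = worker
    while index < frames.count {
        _ = try? model.prediction(from: frames[index])
        index += 4
    }
}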
I was generating models using this code:
import Foundation
import CreateML
import TabularData
import CoreML

....

func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
    var returnmodels = [MLLinearRegressor]()
    var result = 0.0
    for i in 0...numberofmodels {
        let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
        do {
            let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
            returnmodels.append(tm)
        } catch let error as NSError {
            print("Error: \(error.localizedDescription)")
        }
    }
    return returnmodels
}
This worked absolutely fine on Sonoma, but upon upgrading the OS to 15.3.1, it does absolutely nothing.
I get no error messages; the code just pauses. If I look at CPU usage, as soon as it hits the line let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict), the CPU usage drops to 0%.
What am I doing wrong? Is there a flag I need to set somewhere in Xcode?
This is on an M1 MacBook Pro
Any help would be greatly appreciated
I downloaded the new developer beta and then installed Xcode. The other downloads succeeded, but I couldn't download the Predictive Code Completion model. When I try to download it I get the error "The operation couldn’t be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)". I am using the M3 Pro model.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Do we know what a safe max token limit is? After some iterating, I have come to believe 4096 might be the limit on device.
Could you help me out by answering any of these questions:
Is 4096 the correct limit?
Do all devices have the same limit?
Will the limit change over time or by device?
The errors I get when going over the limit don't actually say "you are over the limit," so it's just by trial and error that I figure these issues out.
Thanks for the fun new toys.
Regards,
Rob
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Hello everybody!
I’m encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn’t occur on Beta 1 or Beta 2 using the same codebase.
Reproduction Context
I’m developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic.
To isolate the issue, I opened the official WWDC sample project from "Adding intelligent app features with generative models", and the same guardrailViolation error appeared without any modifications.
Simplified Working Example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:
import Foundation
import Playgrounds
import FoundationModels

@Generable
struct GeneableLandmark {
    @Guide(description: "Name of the landmark to visit")
    var name: String
}

final class LandmarkSuggestionGenerator {
    var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
    private var session: LanguageModelSession

    init() {
        self.session = LanguageModelSession(
            instructions: Instructions {
                """
                generate a list of landmarks to visit
                """
            }
        )
    }

    func createLandmarkSuggestion(location: String) async throws {
        let stream = session.streamResponse(
            generating: GeneableLandmark.self,
            options: GenerationOptions(sampling: .greedy),
            includeSchemaInPrompt: false
        ) {
            """
            Generate a list of landmarks to visit in \(location)
            """
        }
        for try await partialResponse in stream {
            landmarkSuggestion = partialResponse
        }
    }
}

#Playground {
    let generator = LandmarkSuggestionGenerator()
    Task {
        do {
            try await generator.createLandmarkSuggestion(location: "New York")
            if let suggestion = generator.landmarkSuggestion {
                print("Suggested landmark: \(suggestion)")
            } else {
                print("No suggestion generated.")
            }
        } catch {
            print("Error generating landmark suggestion: \(error)")
        }
    }
}
But as soon as I use the Sample ItineraryPlanner:
#Playground {
    // Example landmark for demonstration
    let exampleLandmark = Landmark(
        id: 1,
        name: "San Francisco",
        continent: "North America",
        description: "A vibrant city by the bay known for the Golden Gate Bridge.",
        shortDescription: "Iconic Californian city.",
        latitude: 37.7749,
        longitude: -122.4194,
        span: 0.2,
        placeID: nil
    )
    let planner = ItineraryPlanner(landmark: exampleLandmark)
    Task {
        do {
            try await planner.suggestItinerary(dayCount: 3)
            if let itinerary = planner.itinerary {
                print("Suggested itinerary: \(itinerary)")
            } else {
                print("No itinerary generated.")
            }
        } catch {
            print("Error generating itinerary: \(error)")
        }
    }
}
The error pops up:
Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))
Based on my tests:
The error may not be tied to structure complexity (since more nested structures work)
The issue may stem from the tools or prompt content used inside the ItineraryPlanner
The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas
Thank you in advance for your help. Let me know if more details or reproducible code samples are needed - I’m happy to provide them.
Best,
Sasha Morozov
Hi,
One can configure the languages of a (VN)RecognizeTextRequest with either:
.automatic: language to be detected
a specific language, say Spanish
If the request is configured with .automatic and successfully detects Spanish, will the results be exactly equivalent to those of a request configured with Spanish as the language?
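For concreteness, the two configurations I'm comparing look roughly like this (a sketch using the VN API; the Spanish language identifier is illustrative):

import Vision

// Configuration A: let Vision detect the language.
let automaticRequest = VNRecognizeTextRequest()
automaticRequest.automaticallyDetectsLanguage = true

// Configuration B: fix the language to Spanish.
let spanishRequest = VNRecognizeTextRequest()
spanishRequest.automaticallyDetectsLanguage = false
spanishRequest.recognitionLanguages = ["es-ES"]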
I could not find any information about this, and this is very important for the core architecture of my app.
Thanks!
In an App Playground Xcode project there is no Targets menu in the UI. When I try to use the model, it says the model is not in scope. When I did this in a regular project, Xcode automatically generated a Swift class and there were no errors because it had a target, but I see no place to add a target in an App Playground.
Is it possible to train an Adaptor for the Foundation Models to produce Generable output? If so, what would the response part of the training data need to look like? Presumably, under the hood, the model is outputting JSON (or some other similar structure) that can be decoded to a Generable type. Would the response part of the training data for an Adaptor need to be in that structured format?
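For illustration, by "Generable output" I mean responses that decode into a type like this (a hypothetical example, not taken from the adaptor training docs):

import FoundationModels

// Hypothetical Generable type; the question is what the response half of
// an adaptor training pair must look like to produce output such as this.
@Generable
struct Recipe {
    @Guide(description: "Short name of the dish")
    var name: String

    @Guide(description: "Ingredients, one per entry")
    var ingredients: [String]
}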
Topic: Machine Learning & AI
SubTopic: Foundation Models
It's been 4-5 days and my Image Playground has been showing "Downloading Support for Image Playground".
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
I have an image based app with albums, except in my app, albums are known as galleries.
When I tried to conform my existing OpenGalleryIntent to @AssistantIntent(schema: .photos.openAlbum), I had to rename my existing gallery parameter to target in order to fit the predefined shape of this domain.
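The conformance now looks roughly like this (a minimal sketch; GalleryEntity stands in for my app's existing entity type):

import AppIntents

@AssistantIntent(schema: .photos.openAlbum)
struct OpenGalleryIntent: OpenIntent {
    // Formerly `gallery`; the schema requires this parameter to be named `target`.
    var target: GalleryEntity

    func perform() async throws -> some IntentResult {
        // Navigate to the gallery here.
        .result()
    }
}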
Previously, my intent was configured to display as “Open Gallery” with the description “Opens the selected Gallery” in the Shortcuts app. After conforming to the photos domain, it displays as “Open Album” with a description “Opens the Provided Album”.
Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:
Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user.
FB16283840
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Tags: Siri and Voice, Shortcuts, App Intents, Apple Intelligence
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately I get an error when importing FoundationModels: No such module 'FoundationModels'.
Documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels
I can create iOS builds using the FoundationModels framework without issues.
Hope this will be fixed soon!
Config:
Xcode 26.0 beta (17A5241e)
macOS 26.0 Beta (25A5279m)
15-inch, M4, 2025 MacBook Air
Topic: Machine Learning & AI
SubTopic: Foundation Models