Environment: macOS Tahoe 26.0, iOS 26.0 (23A5287g) on a physical device (not the Simulator), Xcode 26.0 beta 3 (17A5276g).
I'm following the tutorial "Testing your asset packs locally". I start the test server with:
xcrun ba-serve --host 192.168.0.109 test.aar
The terminal shows:
Loading asset packs…
Loading the asset pack at "test.aar"…
Listening on port 63125…
Choose an identity in the panel to continue.
Listening on port 63125…
When I run the project, Xcode reports an error: "Download failed: Could not connect to the server." If I visit https://192.168.0.109:63125 in Safari on the iPhone, the page displays "Hello, world!"
Neither of the above gives much of an error message, and I have no idea what the specific cause is. I hope someone can offer some guidance. Best regards.
This is my Manifest.json:
{
  "assetPackID": "testVideoAssetPack",
  "downloadPolicy": {
    "prefetch": {
      "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
    }
  },
  "fileSelectors": [
    {
      "file": "video/test.mp4"
    }
  ],
  "platforms": [
    "iOS"
  ]
}
Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.
Hello,
I'm trying to write a model with PyTorch and convert it to CoreML.
I have written other models and they convert successfully. Even the one that gives the problem does convert, but I can't open it in Xcode to inspect where it runs.
The error that appears is:
There was a problem decoding this Core ML document
validator error: unable to open file for read
Does anyone know why this is happening?
Thanks a lot,
Álvaro Corrochano
Topic:
Machine Learning & AI
SubTopic:
Core ML
When the context window size is exceeded, this error is not thrown (a different error shows up instead), so I can't use it to handle starting a new session:
LanguageModelSession.GenerationError.exceededContextWindowSize
Or am I doing things wrong?
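For reference, this is roughly the handling I expected to work (a minimal sketch; everything other than the error case name is illustrative):
import FoundationModels

// A sketch: recover by starting a fresh session when the context window overflows.
func respondHandlingOverflow(to prompt: String, session: LanguageModelSession) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Start a new session (optionally seeded with a condensed transcript) and retry once.
        let freshSession = LanguageModelSession()
        return try await freshSession.respond(to: prompt).content
    }
}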
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or how to attempt a fix.
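For now I'm mapping the cases I do know about to readable strings, which at least helps with debugging (a sketch; the case list is almost certainly incomplete):
import FoundationModels

// A sketch: map known GenerationError cases to readable messages instead of "error 4".
func describe(_ error: Error) -> String {
    guard let generationError = error as? LanguageModelSession.GenerationError else {
        return error.localizedDescription
    }
    switch generationError {
    case .exceededContextWindowSize:
        return "The prompt and transcript no longer fit in the context window."
    case .guardrailViolation:
        return "The request or response was blocked by the safety guardrails."
    default:
        return String(describing: generationError)
    }
}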
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Machine Learning
Create ML
Apple Intelligence
I’m working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider. I don’t see any other requirements mentioned in the documentation. Has anyone else encountered a similar issue?
Documentation for the style: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2
The error message is also not very helpful. It simply states that the creation failed.
Note: I have enabled ChatGPT Plus, and image generation using ChatGPT styles works fine in the Playground app.
Here’s the relevant code snippet:
import ImagePlayground

do {
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle the generated image
        break
    }
} catch {
    // Handle the error (it only says that creation failed)
}
I’m using the iOS 26 RC, and when I print creator.availableStyles, the external provider style isn't included:
[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
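For completeness, this is the availability check I plan to gate on before requesting the style (a sketch; it assumes ImagePlaygroundStyle supports equality comparison):
import ImagePlayground

// A sketch: only request the external-provider style if the creator actually offers it.
func externalProviderIsAvailable() async throws -> Bool {
    let creator = try await ImageCreator()
    return creator.availableStyles.contains(.externalProvider)
}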
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hello,
My app fully relies on the new Foundation Models. Since Foundation Models require Apple Intelligence, I want to ensure that only devices capable of running Apple Intelligence can install my app.
When checking the UIRequiredDeviceCapabilities property for a suitable value, I found that iphone-performance-gaming-tier seems the closest match. Based on my research:
On iPhone, this effectively limits installation to iPhone 15 Pro or later.
On iPad, it ensures M1 or newer devices.
This exactly matches the hardware requirements for Apple Intelligence.
However, after setting iphone-performance-gaming-tier, I noticed that on iPad, Game Mode (Game Overlay) is automatically activated, and my app is treated as a game.
My questions are:
Is there a more appropriate UIRequiredDeviceCapabilities value that would enforce the same Apple Intelligence hardware requirements without triggering Game Mode?
If not, is there another way to restrict installation to devices that meet the Apple Intelligence requirements (beyond gating the feature at runtime, as sketched below)?
Is there a way to prevent Game Mode from appearing for my app while still using this capability restriction?
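For reference, the runtime gate I mean is roughly this (a minimal sketch using the Foundation Models availability check; it hides the feature on unsupported devices but doesn't restrict installation):
import FoundationModels

// A sketch: check whether the on-device model is usable before exposing the feature.
var foundationModelsAreUsable: Bool {
    if case .available = SystemLanguageModel.default.availability {
        return true
    }
    return false
}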
Thanks in advance for your help.
Following the recommendations in the WWDC24 video "Discover Swift enhancements in the Vision framework" (cf. the video at 10'41"), I used the following code to perform multiple of the new iOS 18 RecognizeTextRequest requests in parallel.
Problem: if more than 2 requests are run in parallel, the requests hang, leaving the app in a state where no more requests can be started -> deadlock.
I tried other ways to run the requests, but no matter the method employed, or what device I use, no more than 2 requests can ever be run in parallel.
func triggerDeadlock() async throws {
    try await withThrowingTaskGroup(of: Void.self) { group in
        // See: WWDC 2024 "Discover Swift enhancements in the Vision framework" at 10:41
        // ############## THIS IS KEY
        let maxOCRTasks = 5 // On a real device, if more than 2 RecognizeTextRequests are launched in parallel using tasks, the requests hang
        // ############## THIS IS KEY
        for _ in 0..<maxOCRTasks {
            let url = ... // URL to some image
            group.addTask {
                // Perform OCR
                let _ = try await performOCRRequest(on: url)
            }
        }
        var nextIndex = maxOCRTasks
        for try await _ in group { // Wait for the next child task to finish
            if nextIndex < pageCount { // pageCount: total number of images, defined elsewhere
                group.addTask {
                    let url = ... // URL to some image
                    // Perform OCR
                    let _ = try await performOCRRequest(on: url)
                }
                nextIndex += 1
            }
        }
    }
}
// MARK: - ASYNC/AWAIT version with iOS 18
@available(iOS 18, *)
func performOCRRequest(on url: URL) async throws -> [RecognizedText] {
    // Create the request
    var request = RecognizeTextRequest() // Single request: no need for ImageRequestHandler
    // Configure the request
    request.recognitionLevel = .accurate
    request.automaticallyDetectsLanguage = true
    request.usesLanguageCorrection = true
    request.minimumTextHeightFraction = 0.016
    // Perform the request
    let textObservations: [RecognizedTextObservation] = try await request.perform(on: url)
    // Convert [RecognizedTextObservation] to [RecognizedText]
    return textObservations.compactMap { observation in
        observation.topCandidates(1).first
    }
}
I also found this Swift forums post mentioning something very similar.
I also opened a feedback: FB17240843
I have a question. In China, long-pressing a picture in the Photos album can segment out the subject. Is this an on-device (local) model? Is there any documentation about it? Can developers use it?
In this thread, I asked about adding parameters to App Shortcuts. The conclusion that I've drawn so far is that for App Shortcuts, there cannot be any parameters in the prompt, otherwise the system cannot find the AppShortcutsProvider. While this is fine for Shortcuts and non-voice interaction, I'd like to find a way to add parameters to the prompt. Here is the scenario:
My app controls a device that displays some content on "pages." The pages are defined in an AppEnum, which I use for Shortcuts integration via App Intents. The App Intent functions as expected and can change the page based on the user's selection within Shortcuts (or prompt for it when using the App Shortcut). What I'd like to do is allow the user to say "Siri, open [page] with [app name]."
So far, the closest I've come to understanding how this works is through the .intentsdefinition file you can create (and SiriKit in general). However, the part that really confused me there is a button in the File Editor that says "Convert to App Intent." To me, this means that I should be able to use the App Intent I've already authored and hook that into Siri, rather than writing an entirely new function/code block that does exactly the same thing. Ideally, that's what I want to do.
What's the right way to define this behavior?
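For reference, this is roughly the shape of what I'm trying: an AppEnum-backed parameter surfaced directly in the App Shortcut phrase (a minimal sketch with hypothetical names):
import AppIntents

// Hypothetical pages, standing in for the AppEnum I already use with Shortcuts.
enum DevicePage: String, AppEnum {
    case home, settings

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Page"
    static var caseDisplayRepresentations: [DevicePage: DisplayRepresentation] = [
        .home: "Home",
        .settings: "Settings"
    ]
}

struct OpenPageIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Page"

    @Parameter(title: "Page")
    var page: DevicePage

    func perform() async throws -> some IntentResult {
        // Tell the device to display the selected page here.
        return .result()
    }
}

struct MyAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenPageIntent(),
            phrases: ["Open \(\.$page) in \(.applicationName)"],
            shortTitle: "Open Page",
            systemImageName: "rectangle.on.rectangle"
        )
    }
}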
p.s. If I had to pick an intent schema in the context of AssistantSchemas, I'd say it's closest to the "Open File" one, if that helps. I'd ultimately like to make the "pages" user-customizable so in the long run, that would be what I'd do.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Siri and Voice
SiriKit
App Intents
Apple Intelligence
Hi friends,
I have just found that the inference speed dropped to only 1/10 of the original model's.
Had anyone encountered this?
Thank you.
Topic:
Machine Learning & AI
SubTopic:
Core ML
Do we know what a safe max token limit is? After some iterating, I have come to believe 4096 might be the limit on device.
Could you help me out by answering any of these questions:
Is 4096 the correct limit?
Do all devices have the same limit?
Will the limit change over time or by device?
The errors I get when going over the limit do not seem to say, hey you are over, so it's just by trial and error that I figure these issues out.
Thanks for the fun new toys.
Regards,
Rob
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0 but they change the model or guardrails in 26.1 and it breaks your feature, is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version like in all of the server based LLM provider APIs? If not, sounds risky to build on.
Hi, DataScannerViewController doesn't recognize currency amounts less than 1.00 (e.g. 0.59 USD, 0.99 EUR, etc.). Why? How can I solve the problem?
This behavior is not described in the Apple documentation. Is there a solution?
This is my code:
func makeUIViewController(context: Context) -> DataScannerViewController {
    let dataScanner = DataScannerViewController(recognizedDataTypes: [.text(textContentType: .currency)])
    return dataScanner
}
Does Core ML object detection only support AABBs (axis-aligned bounding boxes), or also OBBs (oriented bounding boxes)? If not, is there any way to do it using Apple frameworks?
Topic:
Machine Learning & AI
SubTopic:
General
I have exported a Pytorch model into a CoreML mlpackage file and imported the model file into my iOS project. The model is a Music Source Separation model - running prediction on audio-spectrogram blocks and returning separated audio source spectrograms.
The model produces correct results compared to the desktop (GPU + Python) version, but inference on an iPhone 15 Pro Max is really, really slow. Using Xcode's model Performance tool, I can see that inference isn't automatically distributed across compute units: all of it runs on the CPU. The Performance tool's annotations suggest that all ops should be supported by both the GPU and the Neural Engine.
One thing to note: when initializing the model with the MLModelConfiguration compute-unit option .cpuAndGPU or .cpuAndNeuralEngine, there is an error in the Xcode console:
`Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: Failed to determine convolution kernel at location at /private/var/containers/Bundle/Application/2E3C4AFF-1FA4-4C95-AAE4-ECEBC0FB0BF9/mymss.app/mymss.mlmodelc/model.mil:2453:12
@ CreateBnnsGraphProgramFromMIL`
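For reference, this is how I initialize the model (a sketch; the class name stands in for the class Xcode autogenerates from the .mlpackage):
import CoreML

// A sketch of the configuration I use when trying to move inference off the CPU.
func loadModel() throws -> MyMSS {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // also tried .cpuAndGPU and .all
    return try MyMSS(configuration: config)     // "MyMSS" stands in for the autogenerated model class
}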
Before going back to hammering on the model in Python, are there any tips/strategies I could try in the Core ML Tools export phase, or in configuring the model for prediction on iOS?
My export toolchain is currently Linux with Core ML Tools v8.1, export target iOS 16.
We have suddenly encountered a serious issue: our local ML models are no longer being decrypted.
Everything was set up according to the guide at https://developer.apple.com/documentation/coreml/generating-a-model-encryption-key and had been working in production, but yesterday we started receiving the following error:
Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID.}
We haven’t changed anything in our code. This started spontaneously affecting users of the release version as of yesterday. It also no longer works locally — we receive the same error at the moment the autogenerated function is called:
class func load(configuration: MLModelConfiguration = MLModelConfiguration(), completionHandler handler: @escaping (Swift.Result<ZingPDModel, Error>) -> Void)
I assume that I can generate a new key through Xcode, integrate it in place of the old one, and it might start working again. However, this won’t affect existing users until they update the app.
Could the issue be on Apple’s infrastructure side?
Topic:
Machine Learning & AI
SubTopic:
Core ML
I get the following dyld error on an iPad Pro with Xcode 26 beta 4:
Symbol not found: _$s16FoundationModels20LanguageModelSessionC7prewarm12promptPrefixyAA6PromptVSg_tF
Any advice?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi,
One can configure the languages of a (VN)RecognizeTextRequest with either:
.automatic: language to be detected
a specific language, say Spanish
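Concretely, these are the two configurations I mean (a sketch using the new iOS 18 RecognizeTextRequest API; I'm assuming recognitionLanguages takes Locale.Language values, and the same question applies to VNRecognizeTextRequest):
import Vision

// A sketch of the two configurations being compared.
@available(iOS 18, *)
func makeRequests() -> (automatic: RecognizeTextRequest, spanish: RecognizeTextRequest) {
    // 1. Automatic language detection
    var automaticRequest = RecognizeTextRequest()
    automaticRequest.automaticallyDetectsLanguage = true

    // 2. Explicit language: Spanish
    var spanishRequest = RecognizeTextRequest()
    spanishRequest.automaticallyDetectsLanguage = false
    spanishRequest.recognitionLanguages = [Locale.Language(identifier: "es-ES")]

    return (automaticRequest, spanishRequest)
}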
If the request is configured with .automatic and successfully detects Spanish, will the results be exactly equivalent to those of a request with Spanish set as the language?
I could not find any information about this, and this is very important for the core architecture of my app.
Thanks!
I have a mac (M4, MacBook Pro) running Tahoe 26.0 beta. I am running Xcode beta.
I can run code that uses the LLM in a #Preview { }.
But when I try to run the same code in the simulator, I get the 'device not ready' error and I see the following in the Settings app.
Is there anything I can do to get the simulator past this point and allow me to test Apple's LLM on it?
I'm working on my Swift Student Challenge submission and developing a Vision framework-based image classifier. I want to ensure I'm following best practices for training data and adhering to the guidelines for what images I can use to train my image classifier.
What types of images can I use for training my model?
Are there specific image databases or resources recommended by Apple that are safe to use for Swift Student Challenge submissions?
I'm currently considering using images from Wikipedia, along with my own images.