Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

How does the extracted(from:) method of ImagePlaygroundConcept work?
I’m building an app that generates images based on text input from a specific text field. However, I’m encountering a problem: for short prompts like "a cat and a dog", the entire string is sent to Image Playground, even when I use the extracted(from:title:) method. For longer inputs, the behavior is inconsistent: sometimes it extracts keywords correctly, but other times it doesn’t extract anything at all. Since my app relies on generating images from the extracted keywords, this inconsistency hurts the user experience. How can I make sure that keywords are always extracted from the input string? Button("Generate", systemImage: "apple.intelligence") { isPresented = true } .imagePlaygroundSheet(isPresented: $isPresented, concepts: [ImagePlaygroundConcept.extracted(from: text, title: textTitle)]) { url in imageURL = url }
1
0
544
Feb ’25
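For the extraction question above, a minimal sketch of one workaround, assuming the ImagePlayground APIs used in the post plus ImagePlaygroundConcept.text(_:): send short prompts as a literal text concept and reserve extracted(from:title:) for longer input. The word-count threshold is arbitrary and only illustrates the idea.

import SwiftUI
import ImagePlayground

struct GenerateButton: View {
    let text: String
    let textTitle: String
    @State private var isPresented = false
    @State private var imageURL: URL?

    // Heuristic: keyword extraction tends to help on longer passages,
    // so fall back to a literal text concept for short prompts.
    private var concepts: [ImagePlaygroundConcept] {
        let wordCount = text.split(whereSeparator: \.isWhitespace).count
        return wordCount < 8
            ? [.text(text)]                                  // assumed API: .text(_:)
            : [.extracted(from: text, title: textTitle)]
    }

    var body: some View {
        Button("Generate", systemImage: "apple.intelligence") {
            isPresented = true
        }
        .imagePlaygroundSheet(isPresented: $isPresented, concepts: concepts) { url in
            imageURL = url
        }
    }
}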
How to pass data to FoundationModels with a stable identifier
For example: I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM and have the model rewrite the items so they start with an action verb. I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed. I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model. I've tried numbering them in order and adding an originalSortOrder field to my generable type, but it doesn't work reliably. Any suggestions? I could do them one at a time, but I also have a use case where I'm asking for them to be organized into sections, and while I've instructed the model not to rename anything, it still happens. It's just all very nondeterministic.
2
0
289
Jun ’25
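One pattern worth trying for the to-do question above: keep the GUIDs out of the prompt entirely, number the items, and have the model echo a small integer index back alongside each rewritten item, then join on that index locally. A hedged sketch assuming the FoundationModels @Generable/@Guide macros and respond(to:generating:); the types and prompt wording here are illustrative, not a confirmed recipe.

import Foundation
import FoundationModels

@Generable
struct RewrittenItem {
    @Guide(description: "The index of the original to-do, copied unchanged.")
    var index: Int
    @Guide(description: "The to-do rewritten to start with an action verb.")
    var text: String
}

@Generable
struct RewrittenList {
    var items: [RewrittenItem]
}

func rewrite(todos: [(id: UUID, text: String)]) async throws -> [UUID: String] {
    let session = LanguageModelSession()

    // Expose only 0, 1, 2, ... to the model instead of GUIDs.
    let numbered = todos.enumerated()
        .map { "\($0.offset): \($0.element.text)" }
        .joined(separator: "\n")

    let response = try await session.respond(
        to: "Rewrite each to-do so it starts with an action verb. Keep each item's index unchanged.\n\(numbered)",
        generating: RewrittenList.self
    )

    // Join the returned indices back to the original GUIDs locally.
    var result: [UUID: String] = [:]
    for item in response.content.items where todos.indices.contains(item.index) {
        result[todos[item.index].id] = item.text
    }
    return result
}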
Cannot find type ToolOutput in scope
My sample app has been working with the following code: func call(arguments: Arguments) async throws -> ToolOutput { var temp: Int switch arguments.city { case .singapore: temp = Int.random(in: 30..<40) case .china: temp = Int.random(in: 10..<30) } let content = GeneratedContent(temp) let output = ToolOutput(content) return output } However, in the iOS 26 beta 5 SDK, ToolOutput is no longer available. Please advise on what has changed.
3
0
245
Aug ’25
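Regarding the beta 5 question above, a hedged sketch of the direction the API appears to have taken: call(arguments:) now returns an associated Output type (for example a plain String, or GeneratedContent) instead of a ToolOutput wrapper. Verify the exact conformance against the current FoundationModels headers before relying on it.

import FoundationModels

struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns a rough temperature for a supported city."

    @Generable
    enum City {
        case singapore
        case china
    }

    @Generable
    struct Arguments {
        @Guide(description: "The city to look up.")
        var city: City
    }

    // Assumption: with ToolOutput gone, a String (or GeneratedContent)
    // is an acceptable tool output in recent seeds.
    func call(arguments: Arguments) async throws -> String {
        let temp: Int
        switch arguments.city {
        case .singapore: temp = Int.random(in: 30..<40)
        case .china:     temp = Int.random(in: 10..<30)
        }
        return "The temperature is \(temp) degrees."
    }
}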
OpenIntent not executed with Visual Intelligence
I'm building a new feature with the Visual Intelligence framework. My implementations of IndexedEntity and IntentValueQuery work as expected, and I can see a list of objects in the visual search results. However, my OpenIntent doesn't work. When I tap on an object, I get the message "Sorry, something went wrong ..." on screen, and the breakpoint in perform() is never triggered. Things I've tried: I added @MainActor before perform(), which didn't change anything; I set static let openAppWhenRun: Bool = true and static var supportedModes: IntentModes = [.foreground(.immediate)], still nothing; I created a different intent for the "see more" button at the end of the feed. That AppIntent with schema: .visualIntelligence.semanticContentSearch worked, and its perform() is executed.
10
0
363
Aug ’25
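Not a confirmed fix for the error above, just a minimal OpenIntent skeleton to compare against a working baseline; LandmarkEntity, its query, and all titles are placeholders, and whether Visual Intelligence requires anything beyond this shape is exactly the open question in the post.

import AppIntents

// Placeholder entity standing in for the poster's IndexedEntity type.
struct LandmarkEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Landmark"
    static var defaultQuery = LandmarkQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

struct LandmarkQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [LandmarkEntity] { [] }
}

// OpenIntent expects the entity parameter to be named `target`.
struct OpenLandmarkIntent: OpenIntent {
    static var title: LocalizedStringResource = "Open Landmark"
    static var openAppWhenRun = true

    @Parameter(title: "Landmark")
    var target: LandmarkEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the entity in the app's UI here.
        return .result()
    }
}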
Real Time Text detection using iOS18 RecognizeTextRequest from video buffer returns gibberish
Hey Devs, I'm trying to create my own Real Time Text detection like this Apple project: https://developer.apple.com/documentation/vision/extracting-phone-numbers-from-text-in-images I want to use the new iOS 18 RecognizeTextRequest instead of the old VNRecognizeTextRequest in my SwiftUI project. This is my delegate code with the camera setup. I removed the region of interest for debugging, but I'm trying to scan English words in books; the idea is to get one word in the ROI in the future. But I can't even get proper words, so I'm testing without the ROI in case my math is wrong. @Observable class CameraManager: NSObject, AVCapturePhotoCaptureDelegate ... override init() { super.init() setUpVisionRequest() } private func setUpVisionRequest() { textRequest = RecognizeTextRequest(.revision3) } ... func setup() -> Bool { captureSession.beginConfiguration() guard let captureDevice = AVCaptureDevice.default( .builtInWideAngleCamera, for: .video, position: .back) else { return false } self.captureDevice = captureDevice guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else { return false } /// Check whether the session can add input. guard captureSession.canAddInput(deviceInput) else { print("Unable to add device input to the capture session.") return false } /// Add the input and output to session captureSession.addInput(deviceInput) /// Configure the video data output videoDataOutput.setSampleBufferDelegate( self, queue: videoDataOutputQueue) if captureSession.canAddOutput(videoDataOutput) { captureSession.addOutput(videoDataOutput) videoDataOutput.connection(with: .video)? .preferredVideoStabilizationMode = .off } else { return false } // Set zoom and autofocus to help focus on very small text do { try captureDevice.lockForConfiguration() captureDevice.videoZoomFactor = 2 captureDevice.autoFocusRangeRestriction = .near captureDevice.unlockForConfiguration() } catch { print("Could not set zoom level due to error: \(error)") return false } captureSession.commitConfiguration() // potential issue with background vs dispatchqueue ?? Task(priority: .background) { captureSession.startRunning() } return true } } // Issue here ???
extension CameraManager: AVCaptureVideoDataOutputSampleBufferDelegate { func captureOutput( _ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection ) { guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return } Task { textRequest.recognitionLevel = .fast textRequest.recognitionLanguages = [Locale.Language(identifier: "en-US")] do { let observations = try await textRequest.perform(on: pixelBuffer) for observation in observations { let recognizedText = observation.topCandidates(1).first print("recognized text \(recognizedText)") } } catch { print("Recognition error: \(error.localizedDescription)") } } } } The results I get look like this (a full page of English from a book): recognized text Optional(RecognizedText(string: e bnUI W4, confidence: 0.5)) recognized text Optional(RecognizedText(string: ?'U, confidence: 0.3)) recognized text Optional(RecognizedText(string: traQt4, confidence: 0.3)) recognized text Optional(RecognizedText(string: li, confidence: 0.3)) recognized text Optional(RecognizedText(string: 15,1,#, confidence: 0.3)) recognized text Optional(RecognizedText(string: jllÈ, confidence: 0.3)) recognized text Optional(RecognizedText(string: vtrll, confidence: 0.3)) recognized text Optional(RecognizedText(string: 5,1,: 11, confidence: 0.5)) recognized text Optional(RecognizedText(string: 1141, confidence: 0.3)) recognized text Optional(RecognizedText(string: jllll ljiiilij41, confidence: 0.3)) recognized text Optional(RecognizedText(string: 2f4, confidence: 0.3)) recognized text Optional(RecognizedText(string: ktril, confidence: 0.3)) recognized text Optional(RecognizedText(string: ¥LLI, confidence: 0.3)) recognized text Optional(RecognizedText(string: 11[Itl,, confidence: 0.3)) recognized text Optional(RecognizedText(string: 'rtlÈ131, confidence: 0.3)) Even with the ROI set to a specific rectangle normalized to Vision coordinates, I get the same results, with single characters returning gibberish. Any help would be amazing, thank you. Am I using the buffer right? Am I using the new perform(on: CVPixelBuffer) right? Maybe I didn't set up my camera properly? I can provide more code if needed.
1
0
341
Jul ’25
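One thing worth ruling out for the gibberish above: AVCaptureVideoDataOutput delivers buffers in the sensor's landscape orientation by default, so a phone held in portrait hands Vision sideways text. A small sketch assuming iOS 17+ AVCaptureConnection.videoRotationAngle; 90 degrees is the usual portrait/back-camera case and should really be derived from the current device orientation.

// Inside setup(), after adding videoDataOutput to the session:
if let connection = videoDataOutput.connection(with: .video) {
    connection.preferredVideoStabilizationMode = .off
    // Rotate buffers upright so recognized text isn't sideways.
    if connection.isVideoRotationAngleSupported(90) {
        connection.videoRotationAngle = 90   // portrait, back camera (assumed)
    }
}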
Initializing session with transcript ignores tools
When I initialize a session with an existing transcript using this initializer: public convenience init(model: SystemLanguageModel = .default, guardrails: LanguageModelSession.Guardrails = .default, tools: [any Tool] = [], transcript: Transcript) the tools get ignored. I noticed that when doing this, the model never uses the tools. When inspecting the transcript, I can see that the instructions entry does not have any tools available to it. I tried this both for transcripts that already include an instructions entry and for ones that don't; both yield the same result. Is this the intended behavior, or am I missing something here?
1
0
210
Jul ’25
Stream response
With the respond() methods, the foundation model works well enough. With the streamResponse() methods, the responses are very repetitive, verbose, and messy. My app using the foundation model consumes more than 500 MB of memory on an iPad Pro when running from Xcode. Devices supporting Apple Intelligence have at least 8 GB of memory. Should Apple use a bigger model (3 to 4 GB of memory) for better streamed responses?
2
0
266
Jul ’25
LanguageModelStream and collecting the final output
I have a Generable type with many elements. I am using a stream() to incrementally process the output (Generable.PartiallyGenerated?) content. At the end, I want to pass the final version (not partially generated) to another function. I cannot seem to find a good way to convert from a MyGenerable.PartiallyGenerated to a MyGenerable. Am I missing some functionality in the APIs?
4
0
586
Jul ’25
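For the question above, a hedged sketch of one way to end up with a complete value: keep the last snapshot from the stream and convert it once iteration finishes, failing if anything is still missing. Itinerary and its fields are placeholders, and depending on the SDK seed the stream element may expose the partial value via a .content property instead of being the partial itself; it is also worth checking whether the response stream offers a one-shot collect()-style call before hand-rolling this.

import FoundationModels

@Generable
struct Itinerary {
    var title: String
    var days: [String]
}

struct IncompleteGenerationError: Error {}

func finalItinerary(session: LanguageModelSession, prompt: String) async throws -> Itinerary {
    let stream = session.streamResponse(to: prompt, generating: Itinerary.self)

    // Keep the most recent snapshot while updating the UI incrementally.
    var last: Itinerary.PartiallyGenerated?
    for try await partial in stream {
        last = partial
        // Drive incremental UI updates from `partial` here.
    }

    // Once the stream ends, every property should be populated.
    guard let last, let title = last.title, let days = last.days else {
        throw IncompleteGenerationError()
    }
    return Itinerary(title: title, days: days)
}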
Apple's Illusion of Thinking paper and Path to Real AI Reasoning
Hey everyone, I'm Manish Mehta, field CTO at Centific. I recently read Apple's white paper, The Illusion of Thinking, and it got me thinking about the current state of AI reasoning. Who here has read it? The paper highlights how LLMs often rely on pattern recognition rather than genuine understanding; when faced with complex tasks, their performance can degrade significantly. To move beyond this problem, I think we need to explore approaches that combine Deeper Reasoning Architectures for true cognitive capability with Deep Human Partnership to guide AI toward better judgment and understanding. The first part means fundamentally rewiring AI to reason. This involves advancing deeper architectures like World Models, which can build internal simulations to understand real-world scenarios, and Neurosymbolic systems, which combine neural networks with symbolic reasoning for deeper self-verification. Additionally, we need to look at deep human partnership and scalable oversight. An AI cannot learn certain things from data alone; it lacks real-world judgment. Among other things, deep domain-expert human partners are needed to instill this wisdom, validate the AI's entire reasoning process, build its ethical guardrails, and act as skilled adversaries to find hidden flaws before they can cause harm. What do you all think? Is this focus on a deeper partnership between advanced AI reasoning and deep human judgment the right path forward? Agree? Disagree? Thanks
2
0
288
Jul ’25
Memory Attribution for Foundation Models in iOS 26
Hi, I’m developing an app targeting iOS 26 that uses the new FoundationModels framework to perform on-device LLM inference, and I’m currently testing memory usage. Does the memory used by FoundationModels (including model weights, KV cache, and any inference-related buffers) count toward my app’s Jetsam memory limit, or is any of it managed separately by the system? I may need to run two concurrent inferences, each with a 4096-token context window. Is this explicitly supported or allowed by FoundationModels on iOS 26? Would it significantly increase the risk of memory-based termination? Thanks in advance for any clarification.
1
0
414
Jul ’25
What's the best way to load adapters to try?
I'm new to Swift and was hoping the Playground would support loading adapters. When I tried, I got a permissions error; I'm guessing that's because the adapter file isn't in the project and Playgrounds don't like reaching outside the project. A tutorial and some sample code would be helpful, along with some benchmarks on how long loading is expected to take. Selfishly, I'm on an M2 Mac mini.
1
0
296
Jul ’25
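On the adapter question above, a heavily hedged sketch of the load path: the Adapter(fileURL:) initializer name and the .fmadapter packaging are assumptions to verify against the current FoundationModels documentation, and the file needs to live somewhere the sandbox can read (inside the project or the app's container), which is the likely cause of the permissions error when pointing outside the project.

import Foundation
import FoundationModels

// Assumption: a trained adapter can be loaded from a local file URL
// and attached to the system model.
func makeAdaptedSession(adapterURL: URL) throws -> LanguageModelSession {
    let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)
    let model = SystemLanguageModel(adapter: adapter)
    return LanguageModelSession(model: model)
}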
Crash inside of Vision predictWithCVPixelBuffer - Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
Hello, we have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block: let requestHandler = ImageRequestHandler(paddedImage) var request = CoreMLRequest(model: model) request.cropAndScaleAction = .scaleToFit let results = try await requestHandler.perform(request) The client using this code is wrapped inside an actor, following Swift concurrency principles. The issue has been consistently reproduced across multiple iPadOS versions, including iPadOS 18.4.0, iPadOS 18.4.1, and iPadOS 18.5.0. This is the crash log: Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer 0 libobjc.A.dylib 0x7b98 objc_retain + 16 1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16 2 libobjc.A.dylib 0xbf18 objc_getProperty + 100 3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148 4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748 5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132 6 Vision 0x14600 VNExecuteBlock + 80 7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56 8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240 9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16 10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56 11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452 12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60 13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324 14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336 15 Vision 0x14600 VNExecuteBlock + 80 16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256 17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16 18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284 19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632 20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124 21 Vision 0x14600 VNExecuteBlock + 80 22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340 23 Vision 0x125410 __swift_memcpy112_8 + 4852 24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292 25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156 26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364 27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156 28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232 29 libsystem_pthread.dylib 0xaac start_wqthread + 8 We found an issue similar to ours: https://developer.apple.com/forums/thread/770771. But the crash logs are quite different, so we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
3
0
363
Jul ’25
Localizing prompts that has string interpolated generable objects
I'm working on localizing my prompts to support multiple languages, and in some cases my prompts have string-interpolated Generable objects, for example: "Given the following workout routine: \(routine), suggest one additional exercise to complement it." In the Strings dictionary, I'm only able to select String, Int, or Double parameters using %@ and %lld. Has anyone found a way to accomplish this?
1
0
364
Jul ’25
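For the localization question above, one workaround: interpolate a plain-String rendering of the Generable value rather than the value itself, so the catalog entry only needs a %@ placeholder. A sketch using standard Foundation localization; WorkoutRoutine and promptDescription are hypothetical stand-ins for the poster's type.

import Foundation

// Hypothetical stand-in for the interpolated Generable type.
struct WorkoutRoutine {
    var exercises: [String]

    // Render the value as plain text so it can fill a %@ placeholder.
    var promptDescription: String {
        exercises.joined(separator: ", ")
    }
}

func exercisePrompt(for routine: WorkoutRoutine) -> String {
    // The catalog entry uses a %@ placeholder in every language:
    // "Given the following workout routine: %@, suggest one additional exercise to complement it."
    String(
        format: NSLocalizedString(
            "Given the following workout routine: %@, suggest one additional exercise to complement it.",
            comment: "Prompt sent to the on-device model"
        ),
        routine.promptDescription
    )
}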
Easy way to implement facial recognition
Hi everyone 😊, I want to implement facial recognition in my app. I was planning to use Create ML's image classification, but there seems to be a lot of hassle to implement it (the JSON file, etc.). Are there other easy-to-implement options that don't involve advanced coding? Thanks, Oliver
2
0
591
Feb ’25
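For the question above, note that Vision ships face detection with no model training at all; recognizing which person a face belongs to is a harder problem, but detection plus Vision's image feature prints can cover many cases. A minimal detection sketch using the long-standing VNDetectFaceRectanglesRequest API.

import UIKit
import Vision

/// Returns the normalized bounding boxes of faces detected in an image.
func detectFaces(in image: UIImage) throws -> [CGRect] {
    guard let cgImage = image.cgImage else { return [] }

    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])

    return (request.results ?? []).map(\.boundingBox)
}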
Inform iOS about AppShortcutsProvider
I've been following along with App Shortcuts development but cannot get Siri to run my intent. The intent works on its own in Shortcuts, along with a couple of others that aren't in the AppShortcutsProvider structure. I keep getting the following two errors, but cannot figure out from the documentation or other forum posts why this is occurring. No ConnectionContext found for 12909953344 Attempted to fetch App Shortcuts, but couldn't find the AppShortcutsProvider. Here are the relevant snippets of code - (1) The AppIntent definition struct SetBrightnessIntent: AppIntent { static var title = LocalizedStringResource("Set Brightness") static var description = IntentDescription("Set Glass Display Brightness") @Parameter(title: "Level") var level: Int? static var parameterSummary: some ParameterSummary { Summary("Set Brightness to \(\.$level)%") } func perform() async throws -> some IntentResult { guard let level = level else { throw $level.needsValueError("Please provide a brightness value") } if level > 100 || level <= 0 { throw $level.needsValueError("Brightness must be between 1 and 100") } // do stuff with level return .result() } } (2) The AppShortcutsProvider (defined in my iOS app target; there are no other targets) struct MyAppShortcuts: AppShortcutsProvider { static var shortcutTileColor: ShortcutTileColor = .grayBlue @AppShortcutsBuilder static var appShortcuts: [AppShortcut] { AppShortcut( intent: SetBrightnessIntent(), phrases: [ "set \(.applicationName) brightness to \(\.$level)", "set \(.applicationName) brightness to \(\.$level) percent" ], shortTitle: LocalizedStringResource("Set Glass Brightness"), systemImageName: "sun.max" ) } } Does anything here look wrong? Is there some magical key that I need to specify in Info.plist to get Siri to recognize the AppShortcutsProvider? On Xcode 16.2 and iOS 18.2 (non-beta).
5
0
1.3k
Jan ’25
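Two things commonly checked for the "couldn't find the AppShortcutsProvider" error above, offered as hedged suggestions rather than a confirmed fix: parameterized phrases generally require the parameter to be an AppEnum or AppEntity with predefined values (a free-form Int like level may not be resolvable when phrases are donated), and the phrase set should be re-registered at launch. updateAppShortcutParameters() is a documented AppShortcutsProvider method; the app type below is a placeholder.

import SwiftUI
import AppIntents

@main
struct GlassApp: App {   // placeholder for the poster's @main app type
    init() {
        // Re-register App Shortcut phrases so Siri can discover the provider.
        MyAppShortcuts.updateAppShortcutParameters()
    }

    var body: some Scene {
        WindowGroup { Text("Glass Display") }   // placeholder UI
    }
}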
Foundation Model Always modelNotReady
I'm testing the Foundation Model on my iPad Pro (5th gen) running iOS 26. As of late this morning, I can no longer load SystemLanguageModel.default. I'm not doing anything unusual; something as basic as this now reports unavailable, specifically unavailable reason: modelNotReady. let model = SystemLanguageModel.default ... switch model.availability { case .available: print("LM available") case .unavailable(let reason): print("unavailable reason: ", String(describing: reason)) } I also ran the FoundationModelsTripPlanner app; same thing. It was working yesterday, and I haven't modified that project either. Why is the model not ready? How do I fix this? Yes, I tried restarting both my laptop and my iPad; no luck.
3
0
277
Jul ’25