Apple’s AI Patent Activities in October 2024

Overview

  • 20 Publications
  • 7 Patents

New Innovations

1. Efficient Hardware Utilization for AI Tasks

What could Apple be doing?
Apple is developing a neural engine circuit that can perform both convolution and parallel sorting operations on input data using the same hardware components. By reusing the operation circuits and accumulator for both tasks, the two operations can be implemented efficiently on shared hardware.

What does this mean?
This innovation could lead to more efficient AI processors by reducing the need for separate hardware components for different tasks. It could result in cost savings, reduced power consumption, and increased processing speed, making AI applications more accessible and efficient.

Related Patent: US12106206B2
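The shared-hardware idea can be illustrated in software. Below is a minimal sketch (not the patented circuit; names and structure are hypothetical) in which one compute primitive is configured either as a multiply-accumulate step for convolution or as a compare-select step for finding extrema:

```python
# Illustrative sketch: one "operation circuit" primitive reused in two modes,
# loosely mirroring US12106206B2's reuse of the same hardware for convolution
# and parallel sorting. This is a software analogy, not the patented design.

def op_circuit(acc, a, b, mode):
    """Shared compute element: multiply-accumulate or compare-select."""
    if mode == "mac":      # convolution mode: acc += a * b
        return acc + a * b
    if mode == "max":      # sorting/selection mode: keep the larger value
        return max(acc, a)
    raise ValueError(mode)

def convolve_1d(signal, kernel):
    """Valid 1-D convolution built on the shared primitive."""
    out = []
    for i in range(len(signal) - len(kernel) + 1):
        acc = 0
        for j, w in enumerate(kernel):
            acc = op_circuit(acc, signal[i + j], w, "mac")
        out.append(acc)
    return out

def select_max(values):
    """Extremum selection built on the same primitive, different mode."""
    acc = float("-inf")
    for v in values:
        acc = op_circuit(acc, v, None, "max")
    return acc
```

The point of the sketch is that both operations reduce to repeated applications of one accumulate-style step, which is what makes hardware sharing attractive.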

Advancements

1. Context-Aware User Assistance Without Location Tracking

What could Apple be doing?
Enhancing user experience by predicting actions through sensor data.

What’s new?
The new patent emphasizes personalized experiences by predicting user actions using sensor data without relying on continuous location tracking.

Related Patent: US20240361737A1

2. Seamless Handwritten Text Integration

What could Apple be doing?
Apple is enhancing the integration of handwritten inputs with digital text interaction, improving user experience by distinguishing between text and non-text strokes.

What’s new?
The new patent introduces a machine-learning engine for better stroke disambiguation and text recognition, enabling precise identification of word boundaries and text groupings.

Related Patent: US20240362943A1
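The stroke-grouping step can be sketched as a toy example, assuming strokes carry per-stroke text/non-text labels and using a horizontal-gap heuristic for word boundaries (the threshold and data layout are illustrative assumptions, not from the patent):

```python
# Toy sketch of stroke-level grouping as described around US20240362943A1:
# strokes are labeled text/non-text, and text strokes are grouped into words
# by horizontal gap. Threshold and tuple layout are assumptions.

def group_words(strokes, gap=10):
    """strokes: list of (x_start, x_end, is_text), sorted by x_start."""
    words, current = [], []
    for x0, x1, is_text in strokes:
        if not is_text:
            continue                      # drawings do not join words
        if current and x0 - current[-1][1] > gap:
            words.append(current)         # gap too wide: close the word
            current = []
        current.append((x0, x1))
    if current:
        words.append(current)
    return words

words = group_words([(0, 5, True), (6, 12, True), (30, 40, True),
                     (50, 55, False), (60, 70, True)])
```

Here the five strokes yield three words: the first two strokes merge, the non-text stroke is skipped, and the wide gaps split the rest.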

3. Visualizing Invisible Environmental Features

What could Apple be doing?
Apple is developing technology to visualize invisible environmental features like WiFi signals and magnetic fields in real-time.

What’s new?
The new patent enhances user interaction by providing an immersive experience, allowing users to see and interact with invisible phenomena accurately in the real world.

Related Patent: US12131533B2

4. Enhanced Neural Processor Architecture for Efficient Computation

What could Apple be doing?
Apple is optimizing neural processors for more efficient neural network computations.

What’s new?
The new patent introduces a specialized planar engine circuit for broadcasting and reshaping data, enabling efficient processing of varied input sizes by compressing and expanding data before computation.

Related Patent: US12124943B2
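As a software analogy, NumPy broadcasting can stand in for the planar engine's broadcast step: a small operand is expanded to the full input size before an elementwise computation. This is only an analogy for the data-shaping idea, not the circuit itself:

```python
# Software analogy for the broadcast/reshape step of US12124943B2's planar
# engine: a per-channel operand is logically expanded to the full feature-map
# size before the elementwise computation, without an explicit copy.
import numpy as np

per_channel_bias = np.array([1.0, 2.0, 3.0])   # shape (3,)
feature_map = np.zeros((4, 4, 3))              # H x W x C
out = feature_map + per_channel_bias           # broadcast to (4, 4, 3)
```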

5. Enhanced Noise Reduction in Touch Data

What could Apple be doing?
Apple is improving touch screen accuracy by using machine learning to reduce noise in touch data.

What’s new?
The new patent specifies the sequential application of GRU and CNN, detailing that the GRU targets noise from the touch screen itself, while the CNN addresses noise from other components like displays and power systems.

Related Patent: US20240345683A1
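The two-stage shape of such a pipeline can be sketched with an untrained minimal GRU followed by a 1-D temporal convolution. The weights, sizes, and stage details below are illustrative, not the patent's trained models:

```python
# Hedged sketch of a GRU-then-CNN pipeline over touch-frame data, echoing the
# sequential structure described for US20240345683A1. Weights are random
# stand-ins; a real system would use trained parameters.
import numpy as np

def gru_cell(x, h, W, U, b):
    """Minimal GRU update for one time step."""
    z = 1 / (1 + np.exp(-(W[0] @ x + U[0] @ h + b[0])))  # update gate
    r = 1 / (1 + np.exp(-(W[1] @ x + U[1] @ h + b[1])))  # reset gate
    n = np.tanh(W[2] @ x + U[2] @ (r * h) + b[2])        # candidate state
    return (1 - z) * n + z * h

def denoise(touch_frames, hidden=4):
    rng = np.random.default_rng(0)
    d = touch_frames.shape[1]
    W = rng.normal(size=(3, hidden, d)) * 0.1
    U = rng.normal(size=(3, hidden, hidden)) * 0.1
    b = np.zeros((3, hidden))
    # Stage 1: recurrent pass over time (panel-borne noise)
    h = np.zeros(hidden)
    states = [h := gru_cell(frame, h, W, U, b) for frame in touch_frames]
    seq = np.stack(states)                                # (T, hidden)
    # Stage 2: 1-D convolution over time (noise from display/power rails)
    kernel = np.array([0.25, 0.5, 0.25])
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, seq)

denoised = denoise(np.ones((8, 2)))
```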

6. Enhanced Virtual Assistant for Group Chats

What could Apple be doing?
Apple is enhancing virtual assistants to streamline group chat interactions by automating task coordination.

What’s new?
The new patent emphasizes real-time data gathering from web servers and personalized app suggestions, improving task management and user experience.

Related Patent: US20240346282A1

7. Enhanced Digital Assistant Integration with Third-Party Apps

What could Apple be doing?
Apple is enhancing digital assistants to seamlessly integrate with third-party applications, enabling natural language interactions to perform tasks within these apps.

What’s new?
The new patent introduces a method for determining app associations for intents and flows, allowing the assistant to guide user input and execute tasks within third-party apps, enhancing seamless integration.

Related Patent: AU2023204133B2

8. Gesture Recognition Enhancement in Smartwatches

What could Apple be doing?
Apple is improving gesture recognition in smartwatches using sensor data and machine learning.

What’s new?
The new patent introduces a machine learning model that predicts gestures without needing prior biometric data, allowing for initial use without personalization. Over time, it personalizes the model by building a gesture library for the user.

Related Patent: US12118443B2
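The fallback-then-personalize flow can be sketched as follows; the feature vectors, gesture names, and nearest-prototype classifier are illustrative assumptions, not the patent's model:

```python
# Illustrative sketch of the scheme behind US12118443B2: start with a model
# trained on general-population data, then personalize over time by building
# a per-user gesture library and preferring matches from it.
import math

POPULATION_MODEL = {          # stand-in for a population-trained classifier
    "clench": [1.0, 0.0],
    "pinch":  [0.0, 1.0],
}

def classify(features, library):
    """Prefer the user's own library once populated; else use the
    general-population model (no prior biometrics needed)."""
    candidates = library if library else POPULATION_MODEL
    return min(candidates, key=lambda g: math.dist(features, candidates[g]))

user_library = {}
first = classify([0.9, 0.1], user_library)   # population fallback on first use
user_library[first] = [0.9, 0.1]             # personalization accumulates
```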

9. Enhanced Display Strategies for Virtual Objects in Extended Reality

What could Apple be doing?
Apple is improving the precision of virtual object placement in extended reality by adapting display strategies based on the localization accuracy of the user’s device.

What’s new?
The new patent introduces dynamic adjustment of virtual object positioning, falling back to safer display locations when localization accuracy is low so that objects do not appear to float or sit in the wrong place.

Related Patent: US12118685B2
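A minimal sketch of the strategy selection, assuming hypothetical error thresholds and strategy names (the patent does not specify these values):

```python
# Minimal sketch of display-strategy selection per US12118685B2's idea:
# thresholds (in meters) and strategy names are illustrative assumptions.
def choose_display_strategy(localization_error_m: float) -> str:
    if localization_error_m < 0.05:
        return "world-locked"   # precise placement relative to real objects
    if localization_error_m < 0.5:
        return "coarse"         # less precise placement
    return "fallback"           # e.g. a screen-fixed fallback location

strategy = choose_display_strategy(0.02)
```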

10. Enhancing Machine Learning Model Robustness

What could Apple be doing?
Apple is improving machine learning model robustness by detecting input distribution changes and adjusting models accordingly.

What’s new?
The new patent emphasizes reducing brittleness by comparing training and test sets, calculating morphing weights to prevent bias, and identifying struggling subspaces for potential retraining or label expansion.

Related Patent: US20240338612A1
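One simple way to realize weights of this kind is a histogram density ratio between the test and training marginals. This is an illustrative stand-in; the patent's exact computation of morphing weights may differ:

```python
# Hedged sketch of distribution-shift "morphing" weights in the spirit of
# US20240338612A1: a histogram density ratio so that the reweighted training
# set matches the test distribution. Illustrative, not the patented method.
import numpy as np

def morphing_weights(train, test, bins=10):
    edges = np.histogram_bin_edges(np.concatenate([train, test]), bins=bins)
    p_train, _ = np.histogram(train, bins=edges, density=True)
    p_test, _ = np.histogram(test, bins=edges, density=True)
    idx = np.clip(np.digitize(train, edges) - 1, 0, bins - 1)
    ratio = p_test[idx] / np.maximum(p_train[idx], 1e-12)
    return ratio / ratio.mean()   # normalize so the average weight is 1

rng = np.random.default_rng(1)
w = morphing_weights(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000))
```

Bins where the ratio is extreme (the training set has little support where the test set concentrates) correspond to the "struggling subspaces" that would flag retraining or label expansion.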

11. Dynamic Bit Width Optimization in Neural Processors

What could Apple be doing?
Apple is enhancing neural processors to optimize power and bandwidth for machine learning tasks by dynamically adjusting bit width during computations.

What’s new?
The new patent introduces dynamic bit width adjustments for both training and inference, allowing more flexible resource optimization than an earlier patent in the same family, which covers inference only.

Related Patent: US20240338556A1
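Generic uniform quantization illustrates the precision-versus-cost trade-off behind adjustable bit widths (the scheme below is a standard textbook sketch, not Apple's datapath):

```python
# Illustrative sketch of the trade-off motivating US20240338556A1: narrower
# bit widths save power and bandwidth at the cost of precision. Generic
# symmetric quantization, not the patented mechanism.
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of x onto a signed `bits`-bit grid."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.round(x / scale).astype(int), scale

def dequantize(q, scale):
    return q * scale

x = np.array([-1.0, 0.25, 0.5, 1.0])
q8, s8 = quantize(x, 8)   # wider datapath: higher fidelity, more bandwidth
q4, s4 = quantize(x, 4)   # narrower datapath: cheaper, coarser
err8 = np.max(np.abs(dequantize(q8, s8) - x))
err4 = np.max(np.abs(dequantize(q4, s4) - x))
```

A processor that can switch bit widths per layer or per phase (training vs. inference) can spend precision only where a computation needs it.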

12. Advanced 3D Point Cloud Encoding for Capsule Networks

What could Apple be doing?
Enhancing 3D data processing using capsule neural networks.

What’s new?
Incorporates hierarchical encoding of objects from points and updates capsule features through multi-view agreement.

Related Patent: US20240331207A1



Focus Shifts in Apple’s AI Activities

Key Areas of Innovation

  • AI-Driven Optimization of Wireless Signal Processing and Localization: Q3 23 → Q4* 13 (Δ -43.48%)
  • Programmable Neural Processor Circuit for Scalable Machine Learning Operations: Q3 16 → Q4* 5 (Δ -68.75%)
  • 3D Object Recognition and Image Conversion Using Machine Learning: Q3 3 → Q4* 0 (Δ -100.00%)

Note: Q4* is the latest quarter; its review is still ongoing.

Portfolio Growth Rate: 25.00%

  • Q3 – 48
  • Q4* – 20

Apple’s AI Patent Activities During Q4

Overview

Trend of Publication

  • October – 20

Top Publishing Jurisdictions

  • US – 14
  • WO – 2

Where is Apple focused in this quarter?

Focus Area: AI-Driven Optimization of Wireless Signal Processing and Localization
Description: Enhancing user experiences and device functionalities through AI-driven techniques for wireless signal processing and localization.
Potential Technologies: Predictive user assistance, seamless integration of inputs, environmental compensation, virtual representation, avatar optimization, noise mitigation, virtual assistant integration, gesture detection, display strategies, radio resource management, signal strength measurement, crash detection.
Related Patents: KR102722772B1, US12131533B2 and more

Focus Area: Programmable Neural Processor Circuit for Scalable Machine Learning Operations
Description: Developing specialized neural processors to enhance machine learning operations, focusing on scalability, efficiency, and robustness.
Potential Technologies: Neural network acceleration, input distribution detection, adjustable bit width computations, 3D point cloud encoding.
Related Patents: US12124943B2, US20240338612A1 and more


Quick Insights

New Patent Families

  • KR102722772B1: Compensating for color shifts in camera modules over time due to environmental stresses. The technique involves using time-varying color correction models instead of fixed calibration values. The models predict and apply color adjustments based on factors like solar radiation, humidity, and heat exposure. They compensate for optical changes in components like lenses and filters that can degrade image quality over time. By dynamically adjusting color correction over the life of the camera, it aims to prevent degraded color accuracy as components degrade.
  • WO2024211552A1: AI/ML-based radio resource management for wireless devices like smartphones that enables efficient beamforming and link selection using sub-sampling and prediction techniques. The method involves using AI/ML to learn mappings between low-resolution beam subsets and full-resolution beam maps. By selecting subsets based on SSB coordinates and predicting optimal beams from those subsamples, it reduces the search space for high-resolution beamforming. The AI/ML model generates coarse and prediction maps to select the best beam pair from the subsamples for communication.
  • WO2024205838A1: Efficiently measuring signal strength in a wireless network with many antenna beams using artificial intelligence to interpolate measurements. Because it is impractical for a device to measure signal strength on all beams, the network configures a subset of beams for measurement and uses AI to interpolate full coverage from the subset. This involves training an AI model on partial measurements, then feeding it the subset results to predict full coverage.

Highly Cited Families

  • AU2023204133B2: Integrating applications with digital assistants to enable natural language interaction with tasks and functions in third-party apps. The method involves determining app associations for intents, flows for apps, and providing intents to apps to execute tasks. It allows a user to say things like “Schedule a meeting tomorrow at 2pm” and have the assistant launch the calendar app, populate the details, and schedule the meeting. The assistant can also receive app requests and guide user input to complete tasks. This enables seamless app integration through the assistant interface. (Cites: 126; cited by Universal Electronics, Google and 3 more)
  • US20240346282A1: Using virtual assistants to help group chat participants with tasks during communication sessions. The virtual assistant gathers requested data from web servers and coordinates tasks among the group. For example, it can recommend meeting times by checking calendars, or suggest payment apps by finding what each user has. (Cites: 122; cited by Microsoft, Facebook and 3 more)
  • US20240338612A1: Detecting and responding to changes in input distributions for machine learning models to improve robustness and reduce brittleness. The technique involves comparing characteristics of a training input set to a separate test set to check if they sufficiently correspond. If not, morphing weights are calculated to adjust the training set to match. This prevents bias and overfitting. It also determines temporal patterns of morphing weights across multiple input sets to estimate how model performance changes over time. This allows identifying subspaces where the model is struggling and whether retraining or label expansion is needed. (Cites: 14; cited by EMC IP Holding Company, Oracle International and 3 more)
  • US20240361737A1: Predictive user assistance based on sensor data to provide personalized experiences without relying solely on location tracking. The method involves recording sensor readings at the time of an event, like a user action, and associating it with a label. Later, when matching new sensor readings, if a match is found, it predicts the user is about to perform the action or recognizes they have done it again. This allows context-aware user assistance without continuously tracking locations. (Cites: 8; cited by Xiaomi, Google LLC and 3 more)
  • US20240362943A1: Seamlessly integrating handwritten text inputs with text-interaction options that are typically provided for typed keyboard input. The systems and methods disclosed herein include a machine-learning engine for disambiguation of which strokes in a canvas of handwritten input strokes represent text, and which are not text (e.g., drawings). Text/non-text labels are stored at the stroke level, with which grouping can be performed to construct lines of text. Text recognition can be run on the strokes of each line of text. Following this text recognition, word boundaries can be identified in the stroke space. This knowledge of the word boundaries in stroke space facilitates identification of multiple granularities of groups of text (e.g., words, phrases, lines, paragraphs, etc.), and selection and/or interaction therewith. (Cites: 7; cited by Samsung Electronics, BOE Technology Group and 3 more)

New Additions to Big Families

  • AU2023204133B2: Integrating applications with digital assistants to enable natural language interaction with tasks and functions in third-party apps. The method involves determining app associations for intents, flows for apps, and providing intents to apps to execute tasks. It allows a user to say things like “Schedule a meeting tomorrow at 2pm” and have the assistant launch the calendar app, populate the details, and schedule the meeting. The assistant can also receive app requests and guide user input to complete tasks. This enables seamless app integration through the assistant interface. (Family size: 31)
  • US20240346282A1: Using virtual assistants to help group chat participants with tasks during communication sessions. The virtual assistant gathers requested data from web servers and coordinates tasks among the group. For example, it can recommend meeting times by checking calendars, or suggest payment apps by finding what each user has. (Family size: 18)
  • US12118443B2: Detecting user gestures using outputs from sensors on an electronic device like a smartwatch. The device receives signals from both biosignal sensors (e.g., PPG) and non-biosignal sensors. It uses a machine learning model trained on sensor data from a general population to predict the user’s gesture based on the sensor outputs. This allows gesture recognition without needing a priori knowledge of the user’s unique biometrics. The model can be further personalized over time as the device builds a library of known gestures for the user. (Family size: 6)
  • US12118685B2: Selecting display strategies for virtual objects in extended reality based on the accuracy of localizing the user’s device in the physical environment. If localization accuracy is high, objects are positioned precisely relative to real-world objects. If accuracy is low, objects are positioned less precisely or in a fallback location. This prevents virtual objects appearing floating or misplaced when device localization is inaccurate. The display strategy is determined based on the localization accuracy level. (Family size: 6)
  • US20240361737A1: Predictive user assistance based on sensor data to provide personalized experiences without relying solely on location tracking. The method involves recording sensor readings at the time of an event, like a user action, and associating it with a label. Later, when matching new sensor readings, if a match is found, it predicts the user is about to perform the action or recognizes they have done it again. This allows context-aware user assistance without continuously tracking locations. (Family size: 5)

Issued Patents in this Quarter

  • KR102722772B1: Compensating for color shifts in camera modules over time due to environmental stresses. The technique involves using time-varying color correction models instead of fixed calibration values. The models predict and apply color adjustments based on factors like solar radiation, humidity, and heat exposure. They compensate for optical changes in components like lenses and filters that can degrade image quality over time. By dynamically adjusting color correction over the life of the camera, it aims to prevent degraded color accuracy as components degrade.
  • US12131533B2: Displaying virtual representations of non-visible features of a physical environment, like WiFi signals or magnetic fields, in a way that accurately shows their location in the real world. The system uses sensors to detect these features, obtains a depth map of the environment, identifies a visual context, and overlays a visualization of the non-visible feature at the correct location in the physical environment. This allows users to see and interact with invisible phenomena in a realistic and immersive way.
  • US12125130B1: Optimizing avatar models in virtual and augmented reality applications to improve perceived realism and immersion for viewers. The optimization is based on perceptual, physiological, and direct-report data from users experiencing the avatar. The avatar models are compressed using neural networks and transmitted. The compression network is trained to minimize a cost function that includes a perceptual quality metric. When a user views the avatar, their response data is obtained and used to update the perceptual metric. The compression network then re-renders the avatar using the updated metric to optimize perceived realism for that user.

and 4 more.